Video analysis employs computer vision and machine learning techniques to examine video streams and extract particular features or patterns, such as object recognition, motion detection, facial recognition, and behavior analysis. The process includes analyzing footage to spot suspicious activity or objects and to locate people or vehicles. It also covers monitoring and controlling traffic at intersections, on roads, and on highways, as well as reviewing footage to detect traffic jams and overcrowding and to issue timely alerts or crowd-management recommendations.
I. INTRODUCTION
To conduct their investigations, law enforcement officials frequently need to spend a significant amount of time manually reviewing CCTV footage. To ease this work, a video analysis system is required that can automatically scan CCTV footage for known facilitators such as criminals and potential suspects. The proposed system aims to improve object detection and tracking accuracy both in real time and on historical data. In addition to real-time human detection, it counts and locates recognized facilitators. Real-time human detection accepts a live camera feed, a static video, or a snapshot as input; after detection, the system produces a report and an analysis.
III. METHODOLOGY
A. Proposed Work
The basic workflow of the proposed computer-vision-based people detection system is as follows:
1. Image or Video Feed Capture: The system begins by acquiring an image or a video stream from a camera or sensor.
2. Preprocessing: The captured image or video is processed to improve its quality and remove noise or artifacts.
3. Object Detection: To locate human bodies or faces in the image or video, the system applies an object detection technique such as Haar cascades or Histograms of Oriented Gradients (HOG); a minimal detection sketch follows this list.
4. Feature Extraction: After identifying human bodies or faces, the algorithm extracts features such as shape, size, color, and location.
5. Classification: The system classifies the extracted features as human or non-human using a machine learning method such as a support vector machine (SVM) or a neural network.
6. Alerting: If a human is found, the system sends an alarm or notification to a monitoring station, security staff, or other relevant parties.
7. Tracking: Using methods such as optical flow or Kalman filters, the system can also follow the motion of detected persons across frames over time; a tracking sketch also appears after this list.
8. Post-processing: Finally, the system post-processes the result, for example by removing false positives or refining the classification, and generates a final report.
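As a concrete illustration of steps 3-5, the following is a minimal sketch, assuming Python with OpenCV, of detecting and counting people in a single frame using OpenCV's built-in HOG descriptor with its pretrained linear SVM people detector; the file names and detector parameters are illustrative assumptions, not values from our implementation.

import cv2

def detect_people(frame):
    """Detect people in a BGR frame and return their bounding boxes."""
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    # detectMultiScale returns (rects, weights); winStride, padding and scale
    # trade detection speed against accuracy and are tunable.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    return rects

if __name__ == "__main__":
    image = cv2.imread("cctv_frame.jpg")        # hypothetical input frame
    boxes = detect_people(image)
    print("Humans detected:", len(boxes))       # simple count for the report
    for (x, y, w, h) in boxes:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("annotated_frame.jpg", image)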
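For step 7, the following is a minimal sketch, again assuming OpenCV, of tracking a detected person's centroid with a constant-velocity Kalman filter (state = [x, y, dx, dy], measurement = [x, y]); the centroid sequence below is illustrative, not real detector output.

import cv2
import numpy as np

kf = cv2.KalmanFilter(4, 2)                       # 4 state variables, 2 measured
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.transitionMatrix = np.array([[1, 0, 1, 0],     # constant-velocity model
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

# Hypothetical centroids of one person detected in consecutive frames.
for cx, cy in [(100, 200), (104, 203), (109, 207)]:
    prediction = kf.predict()                     # predicted position for this frame
    kf.correct(np.array([[np.float32(cx)], [np.float32(cy)]]))
    print("predicted:", prediction[:2].ravel(), "measured:", (cx, cy))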
Preprocessing is crucial when using computer vision to recognize humans. Typical preprocessing methods for human detection include:
Image normalization: adjusting an image's brightness and contrast to improve its quality and make it better suited for human detection algorithms.
Image resizing: changing an image's dimensions while preserving its aspect ratio. This is useful when the resolution is too high or too low and must be adjusted to match the requirements of the detection algorithm.
Noise reduction: removing noise and other artifacts from the image or video input. Techniques such as smoothing, median filtering, and Gaussian filtering reduce noise and improve the quality of the input data.
Image enhancement: improving the contrast and sharpening the edges in an image makes it easier for human detection algorithms to recognize the regions of interest. Common approaches include contrast stretching, histogram equalization, and sharpening.
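The following is a minimal sketch, assuming OpenCV, of the resizing, normalization, and noise-reduction steps described above; the target width and filter kernel sizes are illustrative choices rather than tuned values from our system.

import cv2

def preprocess(frame, target_width=640):
    # Resize while preserving the aspect ratio.
    h, w = frame.shape[:2]
    scale = target_width / w
    frame = cv2.resize(frame, (target_width, int(h * scale)))

    # Normalize brightness/contrast into the full 0-255 range.
    frame = cv2.normalize(frame, None, 0, 255, cv2.NORM_MINMAX)

    # Reduce noise: Gaussian smoothing followed by a median filter.
    frame = cv2.GaussianBlur(frame, (5, 5), 0)
    frame = cv2.medianBlur(frame, 3)
    return frame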
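Similarly, a minimal enhancement sketch, assuming an 8-bit grayscale frame, showing global histogram equalization, CLAHE (contrast-limited adaptive equalization), and a percentile-based contrast stretch; the clip limit, tile size, and percentiles are illustrative assumptions.

import cv2
import numpy as np

def enhance(gray):
    # Global histogram equalization spreads intensities over the full range.
    equalized = cv2.equalizeHist(gray)

    # CLAHE limits local contrast amplification, which helps under uneven lighting.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    adaptive = clahe.apply(gray)

    # Linear contrast stretch between the 2nd and 98th intensity percentiles.
    lo, hi = np.percentile(gray, (2, 98))
    scale = 255.0 / max(hi - lo, 1e-6)
    stretched = np.clip((gray.astype(np.float32) - lo) * scale, 0, 255)
    return equalized, adaptive, stretched.astype(np.uint8)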
IV. RESULT
We have developed a system that helps identify known facilitators, detects humans, and keeps a count of them. The system provides an interface through which the user can supply an image, a video, or a live camera feed. The input is then preprocessed, features are extracted, and humans are detected; a minimal sketch of this interface follows.
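The following is a minimal sketch of such an input interface, assuming OpenCV and a detect_people routine like the one sketched in the methodology; the extension check and routine name are assumptions made for illustration.

import cv2

def run(source, detect_people):
    # Static image input: a single path ending in an image extension.
    if isinstance(source, str) and source.lower().endswith((".jpg", ".jpeg", ".png")):
        frame = cv2.imread(source)
        print("count:", len(detect_people(frame)))
        return
    # Otherwise treat the source as a video file path or a camera index (e.g. 0).
    cap = cv2.VideoCapture(source)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        print("count:", len(detect_people(frame)))
    cap.release()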
V. FUTURE SCOPE
Future work includes refining image preprocessing and further improving detection accuracy. The system could also be packaged as a background application that runs without interfering with other tasks and notifies the user when it detects something abnormal. Accidents in public places could be detected, and the model could be trained further to recognize unusual events such as eve-teasing or any kind of violence. The system could also be applied in the defense sector for security purposes.
VI. ACKNOWLEDGEMENT
We gratefully acknowledge our mentor, Ms. Veena Kulkarni, for guiding and supporting us throughout this project, as well as our Principal for providing us a platform to showcase our work.
VII. CONCLUSION
The proposed system is vital in today's world. It can be deployed in public areas such as streets, parks, and transportation hubs to deter crime and to assist in emergency response. Individuals can also use it to monitor activity, prevent theft and vandalism, and ensure safety. In addition, the system can help identify known facilitators during criminal identification: a police professional can supply an image, search the surveillance video, and identify the criminal.